Regularized quasi-monotone method for stochastic optimization

Authors

Abstract

We adapt the quasi-monotone method, an algorithm uniquely characterized by convergence guarantees for the last iterate, to composite convex minimization in the stochastic setting. For the proposed numerical scheme we derive the optimal rate of $$O\left( \frac{1}{\sqrt{k+1}}\right)$$ in terms of the last iterate, rather than on average as is standard for subgradient methods. The theoretical guarantee of individual convergence for the regularized method is confirmed by experiments on $$\ell _1$$-regularized robust linear regression.
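The full scheme is in the paper; as a rough illustration, the sketch below combines an RDA-style composite step, where soft-thresholding handles the $$\ell _1$$ term in closed form, with the convex-combination update that characterizes quasi-monotone methods, so the output is the last iterate. The absolute-loss objective and the parameter choices ($$\lambda _k = \gamma \sqrt{k+2}$$ and the $$(k+1)/(k+2)$$ weights) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(z, tau):
    """Componentwise soft-thresholding: argmin_v 0.5*(v - z)**2 + tau*|v|."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def regularized_quasi_monotone(A, b, mu=0.05, gamma=1.0, n_iter=5000, seed=0):
    """Sketch of a stochastic quasi-monotone method for
    min_x  E|a_i^T x - b_i| + mu*||x||_1  (l1-regularized robust regression)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)          # last iterate: the one the guarantee applies to
    z = np.zeros(d)          # running sum of stochastic subgradients
    for k in range(n_iter):
        i = rng.integers(n)                     # sample one data point
        g = np.sign(A[i] @ x - b[i]) * A[i]     # subgradient of the absolute loss
        z += g
        lam = gamma * np.sqrt(k + 2)            # growing prox weight (assumed schedule)
        # composite dual-averaging step: the l1 term admits a closed-form solution
        v = -soft_threshold(z, (k + 1) * mu) / lam
        # quasi-monotone convex combination controls the last iterate directly
        x = ((k + 1) * x + v) / (k + 2)
    return x

# toy usage: sparse ground truth recovered from noisy measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.1 * rng.standard_normal(200)
print(regularized_quasi_monotone(A, b))
```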


Similar Articles

Dual Averaging Method for Regularized Stochastic Learning and Online Optimization

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the $$\ell _1$$-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularizatio...
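For intuition, here is a minimal sketch of the closed-form $$\ell _1$$-RDA update from this line of work: the running average of all past gradients is soft-thresholded, which is what produces exact zeros in the iterates. The squared-loss stream and the schedule $$\beta _t = \gamma \sqrt{t}$$ are illustrative choices.

```python
import numpy as np

def l1_rda_step(g_bar, t, lam, gamma):
    """Closed-form l1-RDA update:
    argmin_x <g_bar, x> + lam*||x||_1 + (gamma / sqrt(t)) * ||x||^2 / 2."""
    shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

# online loop on a stream of squared-loss examples (illustrative data)
rng = np.random.default_rng(0)
d, lam, gamma = 20, 0.05, 5.0
x, g_bar = np.zeros(d), np.zeros(d)
for t in range(1, 2001):
    a = rng.standard_normal(d)
    b = a[:5].sum() + 0.01 * rng.standard_normal()  # only 5 informative features
    g = (a @ x - b) * a                   # gradient of 0.5*(a^T x - b)^2
    g_bar += (g - g_bar) / t              # running average of all past gradients
    x = l1_rda_step(g_bar, t, lam, gamma)
print(np.nonzero(x)[0])                   # sparsity pattern induced by thresholding
```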


A Stochastic Quasi-Newton Method for Online Convex Optimization

We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperfor...
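A compact sketch in the spirit of that approach, assuming two devices from the online-BFGS literature: curvature pairs are formed from gradient differences on the same minibatch at consecutive iterates, and a damping term $$\lambda s$$ keeps them well conditioned. The step schedule, the toy problem, and the grad_fn interface are all illustrative.

```python
import numpy as np
from collections import deque

def two_loop(g, mem):
    """L-BFGS two-loop recursion: apply the implicit inverse Hessian to g."""
    q, alphas = g.copy(), []
    for s, y, rho in reversed(mem):          # newest pair first
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    s, y, _ = mem[-1]
    q *= (s @ y) / (y @ y)                   # scale by the most recent curvature
    for (s, y, rho), a in zip(mem, reversed(alphas)):
        q += (a - rho * (y @ q)) * s
    return q

def online_lbfgs(grad_fn, x0, eta=0.2, m=10, damping=1.0, n_iter=500, seed=0):
    """Online L-BFGS sketch: y is computed on the SAME minibatch as g."""
    rng = np.random.default_rng(seed)
    x, mem = x0.copy(), deque(maxlen=m)
    for t in range(n_iter):
        batch = int(rng.integers(10**9))     # reproducible minibatch id
        g = grad_fn(x, batch)
        p = -two_loop(g, list(mem)) if mem else -g
        s = eta / (1.0 + t / 100.0) * p      # decaying step (illustrative schedule)
        x_new = x + s
        y = grad_fn(x_new, batch) - g + damping * s   # same batch, plus damping
        if s @ y > 1e-10:
            mem.append((s, y, 1.0 / (s @ y)))
        x = x_new
    return x

# toy usage: minibatch least squares, with the batch id doubling as a sampling seed
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 30))
b = A @ rng.standard_normal(30)
def grad_fn(x, batch):
    idx = np.random.default_rng(batch).integers(0, 500, 32)
    return A[idx].T @ (A[idx] @ x - b[idx]) / 32
x = online_lbfgs(grad_fn, np.zeros(30))
print(np.linalg.norm(A @ x - b) / np.sqrt(500))
```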


A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust and sca...
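The distinguishing device here is that curvature pairs come from a subsampled Hessian-vector product at an averaged iterate, computed only every L steps, rather than from noisy gradient differences. A sketch under those assumptions, reusing two_loop from the previous sketch; grad_fn and hess_vec_fn are hypothetical user-supplied callables:

```python
import numpy as np
from collections import deque

def sqn(grad_fn, hess_vec_fn, x0, eta=0.1, L=20, m=10, n_iter=2000, seed=0):
    """SQN-style sketch: cheap stochastic-gradient steps, with curvature pairs
    built every L iterations from a subsampled Hessian-vector product at an
    averaged iterate -- never from noisy gradient differences."""
    rng = np.random.default_rng(seed)
    x, mem = x0.copy(), deque(maxlen=m)
    x_bar, x_bar_prev = np.zeros_like(x), None
    for k in range(1, n_iter + 1):
        g = grad_fn(x, rng)                        # minibatch gradient
        d = two_loop(g, list(mem)) if mem else g
        x -= (eta / np.sqrt(k)) * d
        x_bar += (x - x_bar) / (k % L or L)        # running mean within the block
        if k % L == 0:
            if x_bar_prev is not None:
                s = x_bar - x_bar_prev
                y = hess_vec_fn(x_bar, s, rng)     # subsampled Hessian times s
                if s @ y > 1e-10:
                    mem.append((s, y, 1.0 / (s @ y)))
            x_bar_prev, x_bar = x_bar, np.zeros_like(x)
    return x
```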


Regularized Newton method for unconstrained convex optimization

We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function, with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
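One plausible instantiation, as a minimal sketch: add a multiple of the identity to the Hessian, with a weight tied to the gradient norm so the regularization vanishes near a solution. The choice $$\mu _k = c\Vert \nabla f(x_k)\Vert$$ is an illustrative assumption, not necessarily the paper's rule. The toy problem below has a singular Hessian, so an unregularized Newton step would fail outright.

```python
import numpy as np

def regularized_newton(grad, hess, x0, c=1.0, tol=1e-10, max_iter=100):
    """Newton iteration with gradient-norm-proportional regularization."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mu = c * np.linalg.norm(g)          # vanishes as we approach a minimizer
        H = hess(x) + mu * np.eye(x.size)   # positive definite whenever f is convex
        x -= np.linalg.solve(H, g)
    return x

# degenerate convex quadratic: f(x) = 0.5*x1^2, flat in x2 (singular Hessian)
Q = np.diag([1.0, 0.0])
print(regularized_newton(lambda x: Q @ x, lambda x: Q, [3.0, 4.0]))
# converges to a point of the optimal set {x1 = 0} despite no strong convexity
```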


Quasi-Monte Carlo Strategies for Stochastic Optimization

In this paper we discuss the issue of solving stochastic optimization problems using sampling methods. Numerical results have shown that using variance reduction techniques from statistics can result in significant improvements over Monte Carlo sampling in terms of the number of samples needed for convergence of the optimal objective value and optimal solution to a stochastic optimization probl...
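A toy illustration of that gap, comparing plain Monte Carlo against scrambled Sobol points from scipy.stats.qmc on a one-dimensional sample-average objective whose expectation is known in closed form; the problem and sample size are illustrative.

```python
import numpy as np
from scipy.stats import qmc

# SAA estimate of F(x) = E[(x - U)^2], U ~ Uniform(0, 1), at a fixed decision x
def saa_value(x, samples):
    return np.mean((x - samples) ** 2)

n, x = 256, 0.3                              # n is a power of two, as Sobol prefers
true_value = x**2 - x + 1.0 / 3.0            # closed-form expectation

mc = np.random.default_rng(0).random(n)      # plain Monte Carlo draws
sobol = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()

print("MC  error:", abs(saa_value(x, mc) - true_value))
print("QMC error:", abs(saa_value(x, sobol) - true_value))
# the low-discrepancy points typically estimate the objective far more accurately
```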



Journal

Journal title: Optimization Letters

Year: 2022

ISSN: 1862-4472, 1862-4480

DOI: https://doi.org/10.1007/s11590-022-01931-4